Image generation based on conditional-Wasserstein generative adversarial network
GUO Maozu, YANG Qiannan, ZHAO Lingling
Journal of Computer Applications    2021, 41 (5): 1432-1437.   DOI: 10.11772/j.issn.1001-9081.2020071138
Generative Adversarial Networks (GAN) can automatically generate target images and are of great significance for generating building arrangements of similar blocks. However, existing model training suffers from low accuracy of the generated images, mode collapse, and low training efficiency. To solve these problems, a Conditional-Wasserstein Generative Adversarial Network (C-WGAN) model for image generation was proposed. First, the model identifies the feature correspondence between a real sample and the target sample, and then generates the target sample according to the identified correspondence. The Wasserstein distance was used to measure the distance between the two image feature distributions, which stabilized GAN training and avoided mode collapse, thereby improving both the accuracy of the generated images and the training efficiency. Experimental results show that, compared with the original Conditional Generative Adversarial Network (CGAN) and pix2pix models, the proposed model improves Peak Signal-to-Noise Ratio (PSNR) by up to 6.82% and 2.19% respectively, and reaches convergence faster for the same number of training rounds. Hence the proposed model not only effectively improves the accuracy of image generation but also speeds up network convergence.
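The Wasserstein objective the abstract refers to can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the score lists stand in for the outputs of a conditional critic D(x|c) on real and generated batches, and `clip_weights` shows the weight-clipping step the original WGAN uses to enforce the Lipschitz constraint.

```python
def critic_loss(d_real_scores, d_fake_scores):
    # WGAN critic maximizes E[D(real)] - E[D(fake)], an estimate of the
    # Wasserstein distance; the minimized loss is its negation.
    return -(sum(d_real_scores) / len(d_real_scores)
             - sum(d_fake_scores) / len(d_fake_scores))

def generator_loss(d_fake_scores):
    # The generator tries to raise the critic's score on generated samples.
    return -sum(d_fake_scores) / len(d_fake_scores)

def clip_weights(weights, c=0.01):
    # Weight clipping keeps the critic (approximately) 1-Lipschitz,
    # which is what stabilizes training and avoids mode collapse.
    return [max(-c, min(c, w)) for w in weights]
```

Unlike the log-loss of a standard CGAN discriminator, this loss stays informative even when the real and generated distributions barely overlap, which is why training converges faster.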
Activity semantic recognition method based on joint features and XGBoost
GUO Maozu, ZHANG Bin, ZHAO Lingling, ZHANG Yu
Journal of Computer Applications    2020, 40 (11): 3159-3165.   DOI: 10.11772/j.issn.1001-9081.2020030301
Current research on activity semantic recognition extracts only sequence and periodicity features in the time dimension and lacks deep mining of spatial information. To solve this problem, an activity semantic recognition method based on joint features and eXtreme Gradient Boosting (XGBoost) was proposed. Firstly, periodic activity features were extracted from the temporal information, and latitude and longitude features from the spatial information. Then the latitude and longitude were used to extract heat features of spatial regions with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, and the feature vector combining these features was used to represent the user's activity semantics. Finally, the activity semantic recognition model was built with the XGBoost ensemble learning algorithm. On two public FourSquare check-in datasets, the model based on joint features improves recognition accuracy by 28 percentage points over the model using only temporal features, and by 30 and 5 percentage points over the Context-Aware Hybrid (CAH) method and the Spatial Temporal Activity Preference (STAP) method respectively. Experimental results show that the proposed method is more accurate and effective than the comparison methods on the activity semantic recognition problem.
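The joint-feature construction described above can be sketched as follows. This is a toy pure-Python sketch under stated assumptions, not the authors' pipeline: `dbscan` is a minimal DBSCAN over 2D points (a production system would use a library implementation), the region "heat" is taken to be the size of the cluster a check-in falls into, and the periodic time feature is a sin/cos encoding of hour-of-day; the final XGBoost training step is omitted.

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors(j)) >= min_pts:
                seeds.extend(neighbors(j))
    return labels

def joint_features(hour, lat, lon, labels, idx):
    # Periodic hour-of-day encoding + raw coordinates + spatial-region heat
    # (here: how many check-ins share this point's DBSCAN cluster).
    angle = 2 * math.pi * hour / 24
    heat = labels.count(labels[idx]) if labels[idx] != -1 else 0
    return [math.sin(angle), math.cos(angle), lat, lon, heat]
```

Each check-in thus becomes one feature vector joining temporal and spatial information, and the set of vectors would then be fed to an XGBoost classifier to predict the activity semantic label.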